Machine learning (ML) methods can effectively analyze data, identify patterns in them, and make high-quality predictions. Good predictions, however, usually come with the inability to present the detected patterns in a human-readable way. Recent technical developments have led to explainable artificial intelligence (XAI) techniques that aim to open such black boxes and enable humans to gain new insights from detected patterns. We study XAI in a domain where specific insights can have a significant effect on consumer behavior, namely electricity use. Knowing that specific feedback on individual electricity consumption triggers resource conservation, we created five visualizations from electricity consumption time series, taking existing domain-specific design knowledge into account. Our experimental evaluation with 152 participants shows that humans can assimilate the patterns displayed by XAI visualizations, but such visualizations should follow known visualization patterns in order to be properly understood by users.
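To make the visualization idea concrete, here is a minimal sketch (not the authors' exact pipeline) of one common domain-specific view of an electricity consumption time series: an hour-by-day heatmap that makes daily usage patterns visible to consumers. The consumption data below is synthetic.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Hypothetical half-year of hourly consumption in kWh (182 days x 24 hours).
consumption = rng.gamma(shape=2.0, scale=0.3, size=(182, 24))
consumption[:, 7:9] += 0.8    # synthetic morning peak
consumption[:, 18:22] += 1.2  # synthetic evening peak

fig, ax = plt.subplots(figsize=(8, 4))
im = ax.imshow(consumption.T, aspect="auto", origin="lower", cmap="viridis")
ax.set_xlabel("Day")
ax.set_ylabel("Hour of day")
fig.colorbar(im, ax=ax, label="Consumption (kWh)")
plt.show()
```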
Advances in machine learning (ML) have generated strong interest in using this technology to support decision making. While complex ML models often provide predictions that are more accurate than those of traditional tools, such models frequently hide the reasoning behind their predictions from users, which can hinder adoption and limit insight. Motivated by this tension, research has proposed explainable artificial intelligence (XAI) techniques that uncover the patterns discovered by ML. While both ML and XAI carry high hopes, there is little empirical evidence of their benefits for traditional businesses. To this end, we analyze data on 220,185 customers of an energy retailer, predict cross-buying with up to 86% correctness (AUC), and show that the XAI method SHAP provides explanations that hold for actual buyers. We further outline implications for research in information systems, XAI, and relationship marketing.
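As an illustration of the kind of analysis described, the following hedged sketch trains a classifier on synthetic stand-in data and explains its predictions with the SHAP library; the paper itself works on real customer data from an energy retailer.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for customer features and a cross-buying label.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# TreeExplainer computes per-feature attributions for each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)
```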
The amount of data available for machine learning (ML) applications grows constantly. Not only the number of observations, but especially the number of measured variables (features) increases with ongoing digitization. Selecting the features best suited for predictive modeling is an important lever for the success of ML in business and research. Feature selection methods (FSMs) that are independent of a specific ML algorithm -- so-called filter methods -- have been suggested in large numbers, but little guidance exists for researchers and quantitative modelers on choosing appropriate approaches for typical ML problems. This review synthesizes the substantial literature on feature selection benchmarking and evaluates the performance of 58 methods in the widely used R environment. For concrete guidance, we consider four typical dataset scenarios that challenge ML models: noisy, redundant, and imbalanced data, as well as cases with more features than observations. Drawing on the experience of an earlier benchmark that tested fewer FSMs, we compare the performance of the methods according to four criteria (predictive performance, number of relevant features selected, stability of the feature sets, and runtime). We find that methods relying on the random forest approach, the double input symmetrical relevance filter (DISR), and the joint impurity filter (JIM) are well-performing candidate methods for the given dataset scenarios.
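The following minimal Python sketch illustrates the filter-method pattern the review benchmarks: features are ranked by a criterion computed independently of any downstream ML algorithm. Mutual information is used here purely as a stand-in; DISR and JIM follow the same pattern with different relevance criteria, and the benchmark itself runs in R.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Hypothetical noisy, redundant data: 100 features, only 10 informative.
X, y = make_classification(n_samples=500, n_features=100, n_informative=10,
                           n_redundant=20, random_state=0)

# Filter method: score each feature against the target, keep the top k.
selector = SelectKBest(score_func=mutual_info_classif, k=10).fit(X, y)
print("Selected feature indices:", np.flatnonzero(selector.get_support()))
```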
This paper addresses kinodynamic motion planning for non-holonomic robots in dynamic environments with both static and dynamic obstacles -- a challenging problem that still lacks a universal solution. One of the promising approaches is to decompose the problem into smaller sub-problems and combine the local solutions into a global one. The crux of any planning method for non-holonomic robots is the generation of motion primitives that provide solutions to the local planning sub-problems. In this work we introduce a novel learnable steering function (policy) that takes into account the kinodynamic constraints of the robot as well as both static and dynamic obstacles. This policy is efficiently trained via policy optimization. Empirically, we show that our steering function generalizes well to unseen problems. We then plug the trained policy into sampling-based and lattice-based planners and evaluate the resulting POLAMP algorithm (Policy Optimization that Learns Adaptive Motion Primitives) in a range of challenging setups involving a car-like robot operating in obstacle-rich parking-lot environments. We show that POLAMP plans collision-free kinodynamic trajectories with success rates above 92% when 50 simultaneously moving obstacles populate the environment, outperforming state-of-the-art competitors.
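A hedged sketch of what such a learnable steering function could look like (an assumption-laden illustration, not the authors' implementation): a small network maps a local observation -- the robot state relative to the sub-goal plus obstacle features -- to a distribution over controls, trainable with any policy-gradient method. The observation and action dimensions below are hypothetical.

```python
import torch
import torch.nn as nn

class SteeringPolicy(nn.Module):
    def __init__(self, obs_dim: int = 16, act_dim: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
        )
        self.mu = nn.Linear(64, act_dim)           # mean of Gaussian controls
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs: torch.Tensor) -> torch.distributions.Normal:
        h = self.body(obs)
        return torch.distributions.Normal(self.mu(h), self.log_std.exp())

policy = SteeringPolicy()
obs = torch.randn(8, 16)                   # batch of hypothetical observations
dist = policy(obs)
action = dist.sample()                     # e.g. (acceleration, steering rate)
log_prob = dist.log_prob(action).sum(-1)   # consumed by a policy-gradient update
```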
Consensus clustering aggregates partitions in order to find a better fit by reconciling clustering results from different sources/executions. In practice, clustering tasks contain noise and outliers, which may significantly degrade performance. To address this issue, we propose a novel algorithm -- robust consensus clustering -- that finds a common ground truth among experts' opinions while remaining minimally affected by the bias that outliers introduce. In particular, we formalize robust consensus clustering as a constrained optimization problem and derive an effective algorithm based on the alternating direction method of multipliers (ADMM) with a rigorous convergence guarantee. Our method outperforms the baselines on benchmarks. We apply the proposed method to real-world advertising campaign segmentation and forecasting tasks, using the consensus clustering results obtained from similarities computed via the Kolmogorov-Smirnov statistic. The accurate clustering result is helpful for building advertiser profiles and thereby improving the forecasting.
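A small sketch of the similarity computation mentioned above, on synthetic data: pairwise similarities between hypothetical campaign metric series derived from the two-sample Kolmogorov-Smirnov statistic, which the consensus clustering can then consume.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Six hypothetical campaign metric series drawn from three latent groups.
series = [rng.normal(loc=i % 3, scale=1.0, size=200) for i in range(6)]

n = len(series)
similarity = np.eye(n)
for i in range(n):
    for j in range(i + 1, n):
        stat = ks_2samp(series[i], series[j]).statistic   # KS distance in [0, 1]
        similarity[i, j] = similarity[j, i] = 1.0 - stat  # higher = more alike
print(similarity.round(2))
```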
In computational advertising, a challenging problem is how to recommend a bid that lets advertisers achieve the best return on investment (ROI) under a budget constraint. This paper presents a bid recommendation scenario that discovers concavity changes in click prediction curves. The recommended bid is derived from the turning point between significant increase (concave downward) and slow increase (convex upward). A parametric-learning-based method is applied by solving the corresponding constrained optimization problem. Empirical studies on real-world advertising scenarios clearly demonstrate gains in business metrics, including revenue, clicks, and advertiser ROI.
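The turning-point idea can be sketched as follows: on a fitted click curve clicks(bid), recommend the bid where the numerically estimated second derivative changes sign, i.e. where fast click growth turns into slow growth. The logistic curve below is a hypothetical stand-in for the learned click prediction model.

```python
import numpy as np

bids = np.linspace(0.1, 5.0, 500)
clicks = 1000.0 / (1.0 + np.exp(-2.0 * (bids - 2.0)))  # hypothetical fitted curve

d2 = np.gradient(np.gradient(clicks, bids), bids)      # numerical curvature
turn = np.where(np.diff(np.sign(d2)) != 0)[0]          # concavity-change indices
recommended_bid = bids[turn[0]]
print(f"Recommended bid near the concavity change: {recommended_bid:.2f}")
```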
In cost-per-click (CPC) or cost-per-impression (CPM) advertising campaigns, advertisers always run the risk of spending the budget without getting enough conversions. Moreover, bidding on advertising inventory is only loosely connected to the propensity of reaching target cost-per-acquisition (tCPA) goals. To address this problem, this paper presents a bid optimization scenario that achieves advertisers' desired tCPA goals. In particular, we build an optimization engine that makes decisions by solving a rigorously formalized constrained optimization problem, which leverages a bid landscape model learned from rich historical auction data via non-parametric learning. The proposed model naturally recommends bids that meet advertisers' expectations by making inferences over their historical auction behavior, which essentially deals with the data challenges commonly faced by bid landscape modeling: incomplete auction logs, and uncertainty due to variation and fluctuation in bidding behavior. The bid optimization model outperforms baseline methods on real-world campaigns and has been applied to a wide range of scenarios for performance improvement and revenue lift.
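A hedged sketch of the constrained formulation, with invented, simplified stand-ins for the learned bid landscape: choose the bid that maximizes expected conversions subject to the cost-per-acquisition staying under the tCPA goal.

```python
import numpy as np
from scipy.optimize import minimize

CVR, TCPA = 0.02, 40.0                       # hypothetical conversion rate and goal

def win_rate(bid):                           # smooth stand-in for the bid landscape
    return 1.0 - np.exp(-0.8 * bid)

def conversions(bid):                        # expected conversions per 1000 auctions
    return 1000.0 * win_rate(bid) * CVR

def cpa(bid):                                # cost per acquisition at this bid
    cost = 1000.0 * win_rate(bid) * bid
    return cost / max(conversions(bid), 1e-9)

res = minimize(lambda b: -conversions(b[0]), x0=[1.0], bounds=[(0.01, 20.0)],
               constraints=[{"type": "ineq", "fun": lambda b: TCPA - cpa(b[0])}])
print(f"Recommended bid: {res.x[0]:.2f}, resulting CPA: {cpa(res.x[0]):.2f}")
```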
Heuristic search algorithms, e.g. A*, are commonly used tools for pathfinding on grids, i.e. graphs of regular structure that are widely employed to represent environments in robotics, video games, etc. Instance-independent heuristics for grid graphs, e.g. Manhattan distance, do not take obstacles into account, so search guided by such heuristics performs poorly in obstacle-rich environments. To this end, we suggest learning instance-dependent heuristic proxies that are meant to notably increase the efficiency of the search. The first heuristic proxy we suggest learning is the correction factor, i.e. the ratio between the instance-independent cost-to-go estimate and the perfect one (computed offline at the training phase). Unlike learning the absolute values of the cost-to-go heuristic function, which has been explored before, learning the correction factor exploits the knowledge carried by the instance-independent heuristic. The second heuristic proxy is the path probability, which indicates how likely a grid cell is to lie on the shortest path. This heuristic can be used as the secondary heuristic in the Focal Search framework, allowing us to preserve the guarantees on the bounded sub-optimality of the solution. We learn both suggested heuristics in a supervised fashion with state-of-the-art neural networks containing attention blocks (transformers). A thorough empirical evaluation on a comprehensive dataset of planning tasks shows that the suggested techniques i) reduce the computational effort of A* by up to a factor of 4 while producing solutions whose costs exceed those of the optimal solutions by less than 0.3% on average, and ii) outperform competitors, including conventional heuristic-search techniques such as weighted A* as well as state-of-the-art learnable planners.
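As a concrete illustration of how correction-factor training targets could be built on a 4-connected grid (an assumption-based sketch, not the authors' code): the perfect cost-to-go h* is computed offline by breadth-first search from the goal, and the target for each cell is the ratio between the Manhattan heuristic and h*.

```python
from collections import deque

grid = [  # 0 = free, 1 = obstacle (hypothetical map)
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]
goal = (0, 3)

def perfect_cost_to_go(grid, goal):
    """BFS from the goal yields the exact cost-to-go for every reachable cell."""
    dist = {goal: 0}
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return dist

h_star = perfect_cost_to_go(grid, goal)
for cell, d in sorted(h_star.items()):
    manhattan = abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    ratio = manhattan / d if d else 1.0  # correction-factor target
    print(cell, "h* =", d, "h/h* =", round(ratio, 2))
```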
To mitigate climate change, the share of renewables in the energy mix needs to be increased. Renewable energies introduce new challenges to power grids due to decentralization, reduced inertia, and volatility in production. Operating sustainable power grids with a high penetration of renewables requires new methods to analyze dynamic stability. We provide new datasets on the dynamic stability of synthetic power grids and find that graph neural networks (GNNs) are surprisingly effective at predicting this highly non-linear target from topological information alone. To illustrate the potential to scale to real-sized power grids, we demonstrate successful prediction on a Texan power grid model.
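A hedged sketch of the setup described, assuming PyTorch Geometric (the graph, node features, and targets below are placeholders, not the paper's datasets): a small GCN maps grid topology to a per-node stability score.

```python
import torch
from torch_geometric.nn import GCNConv

class StabilityGNN(torch.nn.Module):
    def __init__(self, in_dim=4, hidden=32):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 1)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index).relu()
        x = self.conv2(x, edge_index).relu()
        return self.head(x).squeeze(-1)      # one stability score per node

# Toy 5-node line topology; undirected edges listed in both directions.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 3, 4],
                           [1, 0, 2, 1, 3, 2, 4, 3]])
x = torch.randn(5, 4)                        # placeholder node features
scores = StabilityGNN()(x, edge_index)       # tensor of 5 predicted scores
```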
Correctly scoring a driver's risk is of great significance to auto insurance companies. While the tools currently used in this field have proven quite efficient and beneficial in practice, we argue that there is still considerable room for development and improvement in the auto insurance risk estimation process. To this end, we develop a framework that combines a neural network with the dimensionality reduction technique t-SNE (t-distributed stochastic neighbour embedding). This enables us to visually represent the complex structure of the risk as a two-dimensional surface while preserving the properties of local regions in the feature space. The obtained results, based on real insurance data, reveal a clear contrast between high- and low-risk policyholders and indeed improve upon the insurer's actual risk estimation. Given the visual accessibility of the portfolio in this approach, we argue that the framework could benefit the auto insurer both as a main risk prediction tool and as an additional validation stage in other approaches.
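A minimal sketch of the visualization idea on synthetic stand-in data: embed policyholder features into two dimensions with t-SNE and color the points by a simple network's predicted risk.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.manifold import TSNE
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for policyholder features and claim labels.
X, y = make_classification(n_samples=1000, n_features=15, random_state=0)
risk = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                     random_state=0).fit(X, y).predict_proba(X)[:, 1]

emb = TSNE(n_components=2, random_state=0).fit_transform(X)
plt.scatter(emb[:, 0], emb[:, 1], c=risk, cmap="coolwarm", s=8)
plt.colorbar(label="Predicted risk")
plt.show()
```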